
release: 0.11.0 #343

Open
stainless-app[bot] wants to merge 61 commits into main from release-please--branches--main--changes--next

Conversation

@stainless-app
Contributor

@stainless-app stainless-app Bot commented May 4, 2026

Automated Release PR

0.11.0 (2026-05-05)

Full Changelog: v0.10.4...v0.11.0

Features

  • openai_agents: expose real usage, response_id, plumb previous_response_id, opt-in prompt_cache_key for stateful responses and prompt caching (#335) (ba5d64b)

Chores

  • internal: reformat pyproject.toml (76e0299)
  • internal: version bump (0d318ad)

This pull request is managed by Stainless's GitHub App.

The semver version number is based on included commit messages. Alternatively, you can manually set the version number in the title of this pull request.

For a better experience, it is recommended to use either rebase-merge or squash-merge when merging this pull request.

🔗 Stainless website
📚 Read the docs
🙋 Reach out for help or questions

Greptile Summary

  • Adds batched span dispatch (on_spans_start / on_spans_end) to the tracing processor interface and SGPAsyncTracingProcessor, collapsing per-span HTTP calls into a single upsert_batch per drain cycle.
  • SGPAsyncTracingProcessor.on_span_start now delegates to on_spans_start, with the default interface fallback fanning out to single-span methods in parallel.
  • Version bumped from 0.10.4 → 0.11.0 with corresponding changelog, manifest, and pyproject updates.
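The default fan-out fallback described in the second bullet can be sketched roughly as follows. The class and method names mirror those in the summary, but the exact signatures and logging behavior are assumptions, not the library's actual implementation:

```python
import asyncio
import logging
from typing import Any, Iterable

logger = logging.getLogger(__name__)


class AsyncTracingProcessor:
    """Illustrative base class: a single-span hook plus a batched default."""

    async def on_span_start(self, span: Any) -> None:
        # Overridden by concrete processors.
        raise NotImplementedError

    async def on_spans_start(self, spans: Iterable[Any]) -> None:
        # Default fallback: fan out to the single-span method in parallel,
        # logging (not raising) per-span failures so one bad span cannot
        # sink the whole batch.
        spans = list(spans)
        results = await asyncio.gather(
            *(self.on_span_start(s) for s in spans), return_exceptions=True
        )
        for span, result in zip(spans, results):
            if isinstance(result, BaseException):
                logger.error("on_span_start failed for span %r: %s", span, result)
```

A processor that only implements `on_span_start` keeps working unchanged, which is the backward compatibility the summary claims.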

Confidence Score: 3/5

Three P1 issues flagged in prior review threads remain unaddressed; the batching logic is correct but surrounding error-handling and disabled-path bugs carry real runtime risk.

Multiple P1 findings from prior threads are still present: shutdown() crashes with AttributeError when disabled=True and _spans is non-empty; _spans is mutated before upsert_batch confirms success leaving orphaned end-only events on network failure; and the assert guard in _process_items is silently removed under python -O. Any one of these caps confidence at 4/5; all three together pull it to 3/5.

Affected files: src/agentex/lib/core/tracing/processors/sgp_tracing_processor.py (shutdown disabled-path crash, stale _spans on HTTP failure) and src/agentex/lib/core/tracing/span_queue.py (assert stripped by -O).

Important Files Changed

Filename Overview
src/agentex/lib/core/tracing/processors/sgp_tracing_processor.py Adds on_spans_start/on_spans_end overrides for true HTTP batching; three pre-existing P1 issues flagged in prior review threads remain unresolved: shutdown() crashes when disabled with in-flight spans, _spans is mutated before the upsert call can confirm success, and the assert guard in span_queue.py is stripped by -O.
src/agentex/lib/core/tracing/processors/tracing_processor_interface.py Adds default on_spans_start/on_spans_end fan-out implementations with per-span exception logging; backward-compatible for existing single-span processors.
src/agentex/lib/core/tracing/span_queue.py Refactors _process_items to group spans by processor and dispatch via batched methods; the assert used to guard mixed-event-type inputs is stripped by the -O interpreter flag (flagged in prior thread).
tests/lib/core/tracing/processors/test_sgp_tracing_processor.py Adds batched-dispatch tests verifying a single upsert_batch call per drain; coverage is good.
tests/lib/core/tracing/processors/test_tracing_processor_interface.py New test file verifying the default fan-out behavior, failure isolation, and per-failure logging of the base class batched methods.
tests/lib/core/tracing/test_span_queue.py Extends span queue tests with batched-dispatch coverage and a mixed-event-type precondition test.
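The group-by-processor dispatch described for span_queue.py can be sketched as follows; the item and processor shapes (`event_type`, `span`, `processors` attributes) are assumptions for illustration, and the guard is written as an explicit raise rather than the `assert` the review flags:

```python
import asyncio
from collections import defaultdict


async def process_items(items):
    """Sketch: group queued span events by processor, then dispatch one
    batched call per processor instead of one call per span."""
    if not items:
        return
    event_type = items[0].event_type
    if any(i.event_type != event_type for i in items):
        # Explicit guard survives `python -O`, unlike `assert`.
        raise ValueError("callers must split START and END batches before dispatching")

    by_processor = defaultdict(list)
    for item in items:
        for proc in item.processors:
            by_processor[proc].append(item.span)

    if event_type == "START":
        await asyncio.gather(*(p.on_spans_start(s) for p, s in by_processor.items()))
    else:
        await asyncio.gather(*(p.on_spans_end(s) for p, s in by_processor.items()))
```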

Sequence Diagram

sequenceDiagram
    participant C as Caller
    participant Q as AsyncSpanQueue
    participant PI as AsyncTracingProcessor (base)
    participant SGP as SGPAsyncTracingProcessor

    Note over Q: Drain loop collects batch
    C->>Q: enqueue(START, span, [sgp_proc])
    Q->>Q: _drain_loop accumulates batch
    Q->>Q: _process_items(starts) — groups by processor

    Q->>SGP: on_spans_start([span1, span2, …])
    SGP->>SGP: _spans[id] = sgp_span (for each span)
    SGP-->>Q: upsert_batch(items=[…]) ← single HTTP call

    Note over Q: END batch processed after STARTs
    Q->>SGP: on_spans_end([span1, span2, …])
    SGP->>SGP: _spans.pop(id) (for each span)
    SGP-->>Q: upsert_batch(items=[…]) ← single HTTP call

    Note over PI: Default fallback (non-overriding processors)
    Q->>PI: on_spans_start([span1, span2])
    PI->>PI: asyncio.gather(on_span_start(s) for s in spans)
    PI-->>Q: per-span results logged on error

Reviews (11): Last reviewed commit: "release: 0.11.0"

@stainless-app stainless-app Bot force-pushed the release-please--branches--main--changes--next branch from 364e2b2 to b1d20d6 Compare May 4, 2026 19:56
@stainless-app stainless-app Bot force-pushed the release-please--branches--main--changes--next branch from b1d20d6 to b837eeb Compare May 4, 2026 20:22
@stainless-app stainless-app Bot force-pushed the release-please--branches--main--changes--next branch from b837eeb to ac067a6 Compare May 4, 2026 22:16
@stainless-app stainless-app Bot force-pushed the release-please--branches--main--changes--next branch from ac067a6 to 16b956f Compare May 4, 2026 22:51
@stainless-app stainless-app Bot force-pushed the release-please--branches--main--changes--next branch from 16b956f to 2ea4386 Compare May 4, 2026 23:22
@stainless-app stainless-app Bot force-pushed the release-please--branches--main--changes--next branch from 2ea4386 to 3b9a668 Compare May 5, 2026 00:22
Comment on lines +107 to +111
event_type = items[0].event_type
assert all(i.event_type == event_type for i in items), (
    "_process_items requires all items to share the same event_type; "
    "callers must split START and END batches before dispatching."
)

P1 assert in production guard defeats data-corruption protection

The code comment correctly identifies this as a potential "silent data-corruption bug," but using assert for the guard means it is silently stripped when Python runs with the -O (optimize) flag. If a caller ever passes a mixed-event-type list, START and END spans would be fed to the wrong batched method with no warning. Use an explicit if/raise instead.


Comment on lines +141 to 163
sgp_spans: list[SGPSpan] = []
for span in spans:
    self._add_source_to_span(span)
    sgp_span = create_span(
        name=span.name,
        span_type=_get_span_type(span),
        span_id=span.id,
        parent_id=span.parent_id,
        trace_id=span.trace_id,
        input=span.input,
        output=span.output,
        metadata=span.data,
    )
    sgp_span.start_time = span.start_time.isoformat()  # type: ignore[union-attr]
    self._spans[span.id] = sgp_span
    sgp_spans.append(sgp_span)

if self.disabled:
    logger.warning("SGP is disabled, skipping span upsert")
    return
await self.sgp_async_client.spans.upsert_batch(  # type: ignore[union-attr]
    items=[s.to_request_params() for s in sgp_spans]
)

P1 _spans populated before upsert — stale entries on HTTP failure

Spans are added to self._spans before the upsert_batch HTTP call (lines 155–156). If the batch upsert throws (network error, server 5xx), the exception is caught upstream by the queue's _handle, but _spans already holds entries for spans whose start event was never delivered to SGP. A subsequent on_spans_end will find those spans, update them, and send end-only upserts — orphaned end events with no matching start on the server.

The old single-span code registered the span in _spans only after a successful upsert, so failures were cleanly skipped on the end path. Consider populating _spans only after confirming the batch call succeeded, or rolling back entries on exception.
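A sketch of the register-after-success ordering the comment proposes, with stubbed-out client and span types; the real `create_span(...)` construction is collapsed into a hypothetical `_build_sgp_span` helper, and the client shape is an assumption:

```python
import asyncio
import logging

logger = logging.getLogger(__name__)


class SketchProcessor:
    """Minimal stand-in for SGPAsyncTracingProcessor (illustrative only)."""

    def __init__(self, client, disabled=False):
        self.sgp_async_client = client
        self.disabled = disabled
        self._spans = {}

    def _build_sgp_span(self, span):
        # Stand-in for the create_span(...) construction in the real code.
        return span

    async def on_spans_start(self, spans):
        sgp_spans = {span["id"]: self._build_sgp_span(span) for span in spans}

        if self.disabled:
            logger.warning("SGP is disabled, skipping span upsert")
            return

        await self.sgp_async_client.upsert_batch(items=list(sgp_spans.values()))
        # Reached only if upsert_batch did not raise: now it is safe to
        # register the spans for the matching on_spans_end path, so a
        # failed HTTP call leaves no stale entries behind.
        self._spans.update(sgp_spans)
```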


@stainless-app stainless-app Bot force-pushed the release-please--branches--main--changes--next branch from 3b9a668 to b702eb9 Compare May 5, 2026 01:22
Comment on lines +154 to 163
    sgp_span.start_time = span.start_time.isoformat()  # type: ignore[union-attr]
    self._spans[span.id] = sgp_span
    sgp_spans.append(sgp_span)

if self.disabled:
    logger.warning("SGP is disabled, skipping span upsert")
    return
await self.sgp_async_client.spans.upsert_batch(  # type: ignore[union-attr]
    items=[s.to_request_params() for s in sgp_spans]
)

P1 shutdown() crashes with AttributeError when disabled=True and spans are in-flight

on_spans_start now populates self._spans (line 155) before the if self.disabled: return guard (line 158). If any spans are started but not yet ended when shutdown() is called in disabled mode, it reaches self.sgp_async_client.spans.upsert_batch(...) where self.sgp_async_client is None, triggering an AttributeError. Before this PR the disabled path returned before populating _spans, so _spans was always empty at shutdown time and this was never triggered in practice. The fix is to either move the self._spans[span.id] = sgp_span assignment after the if self.disabled guard, or add an early if self.disabled: return check at the top of shutdown() (mirroring how on_spans_end handles it at line 184).
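A sketch of the second suggested fix, an early return at the top of shutdown(); the stub client types and the flush-on-shutdown body are assumptions from context, and only the guard is the point:

```python
import asyncio


class _Spans:
    async def upsert_batch(self, items):
        return None


class _Client:
    spans = _Spans()


class SketchProcessor:
    """Minimal stand-in for SGPAsyncTracingProcessor (illustrative only)."""

    def __init__(self, disabled: bool):
        self.disabled = disabled
        # In disabled mode the real processor never builds a client.
        self.sgp_async_client = None if disabled else _Client()
        self._spans = {"s1": {"id": "s1"}}  # simulate an in-flight span

    async def shutdown(self):
        if self.disabled:
            # Early return: sgp_async_client is None in disabled mode, so
            # any flush below would raise AttributeError.
            return
        if self._spans:
            await self.sgp_async_client.spans.upsert_batch(
                items=list(self._spans.values())
            )
        self._spans.clear()
```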


@stainless-app stainless-app Bot force-pushed the release-please--branches--main--changes--next branch from b702eb9 to 04eafa5 Compare May 5, 2026 03:22
@stainless-app stainless-app Bot force-pushed the release-please--branches--main--changes--next branch from 04eafa5 to 65af241 Compare May 5, 2026 04:22
@stainless-app stainless-app Bot force-pushed the release-please--branches--main--changes--next branch from 65af241 to 0a94229 Compare May 5, 2026 05:22
